
    A quantum view on convex optimization

    In this dissertation we consider quantum algorithms for convex optimization. We start with a black-box setting of convex optimization, in which we show that quantum computers require exponentially fewer queries to a membership oracle for a convex set in order to implement a separation oracle for that set. We do so by proving that Jordan's quantum gradient algorithm can also be used to find sub-gradients of convex Lipschitz functions, even though such functions need not be differentiable. As a corollary we get a quadratically faster algorithm for convex optimization using membership queries. As a second set of results we give sub-linear time quantum algorithms for semidefinite optimization by speeding up the iterations of the Arora-Kale algorithm. For the problem of finding approximate Nash equilibria of zero-sum games we then give dedicated algorithms that improve the error-dependence and depend only on the sparsity of the game, not its size. These last results yield improved algorithms for linear programming as a corollary. We also show several lower bounds in these settings, matching the upper bounds in most or all parameters.
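
    A one-line piece of standard convex analysis (not specific to this dissertation) explains why sub-gradients suffice for separation: a sub-gradient g of a convex function f at x satisfies

        f(y) \ge f(x) + \langle g,\, y - x \rangle \qquad \text{for all } y,

    so the hyperplane through x with normal g separates x from every point where f takes a smaller value; applied to a function built from membership queries, an approximate sub-gradient therefore yields a separating hyperplane for the convex set.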

    Improvements in Quantum SDP-Solving with Applications

    Following the first paper on quantum algorithms for SDP-solving by Brandão and Svore [Brandão and Svore, 2017] in 2016, rapid developments have been made on quantum optimization algorithms. In this paper we improve and generalize all prior quantum algorithms for SDP-solving and give a simpler and unified framework. We take a new perspective on quantum SDP-solvers and introduce several new techniques. One of these is the quantum operator input model, which generalizes the different input models used in previous work, and essentially any other reasonable input model. This new model assumes that the input matrices are embedded in a block of a unitary operator. In this model we give an Õ((√m + √n·γ)αγ⁴) algorithm, where n is the size of the matrices, m is the number of constraints, γ is the reciprocal of the scale-invariant relative precision parameter, and α is a normalization factor of the input matrices. In particular, for the standard sparse-matrix access the above result gives a quantum algorithm with α = s. We also improve on recent results of Brandão et al. [Fernando G. S. L. Brandão et al., 2018], who consider the special case w
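
    For orientation, "embedded in a block of a unitary" is usually formalized as a block-encoding; a common formulation from the block-encoding literature (our paraphrase, with a ancilla qubits and embedding error ε as assumed symbols, neither taken from this abstract) is

        \left\| A - \alpha \,(\langle 0|^{\otimes a} \otimes I)\, U \,(|0\rangle^{\otimes a} \otimes I) \right\| \le \varepsilon,

    i.e. A/α sits, up to error ε/α, in the top-left block of the unitary U. Sparse-access and purified-density-matrix inputs can both be compiled into such a U, which is the sense in which the quantum operator model subsumes the earlier input models.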

    Quantum algorithms for matrix scaling and matrix balancing

    Matrix scaling and matrix balancing are two basic linear-algebraic problems with a wide variety of applications, such as approximating the permanent and preconditioning linear systems to make them more numerically stable. We study the power and limitations of quantum algorithms for these problems. We provide quantum implementations of two classical (in both senses of the word) methods: Sinkhorn's algorithm for matrix scaling and Osborne's algorithm for matrix balancing. Using amplitude estimation as our main tool, our quantum implementations both run in time Õ(√mn/ε⁴) for scaling or balancing an n×n matrix (given by an oracle) with m non-zero entries to within ℓ₁-error ε. Their classical analogs use time Õ(m/ε²), and every classical algorithm for scaling or balancing with small constant ε requires Ω(m) queries to the entries of the input matrix. We thus achieve a polynomial speed-up in terms of n, at the expense of a worse polynomial dependence on the obtained ℓ₁-error ε. Even for constant ε these problems are already non-trivial (and relevant in applications). Along the way, we extend the classical analysis of Sinkhorn's and Osborne's algorithms to allow for errors in the computation of marginals. We also adapt an improved analysis of Sinkhorn's algorithm for entrywise-positive matrices to the ℓ₁-setting, obtaining an Õ(n^1.5/ε³)-time quantum algorithm for ε-ℓ₁-scaling. We also prove a lower bound, showing that our quantum algorithm for matrix scaling is essentially optimal for constant ε: every quantum algorithm for matrix scaling that achieves a constant ℓ₁-error w.r.t. uniform marginals needs Ω(√mn) queries.
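
    To make the iteration concrete, here is a minimal classical sketch of Sinkhorn's algorithm for ℓ₁-scaling to uniform marginals, written as a plain NumPy illustration under the assumption of an entrywise-positive input; the paper's quantum version instead estimates the marginals with amplitude estimation and tolerates errors in them, and the function name is ours.

        import numpy as np

        def sinkhorn_uniform(A, eps=1e-3, max_iters=1000):
            # Scale a positive matrix A so that its row and column sums
            # both approach the uniform marginal 1/n, measured in l1-norm.
            n = A.shape[0]
            target = np.full(n, 1.0 / n)
            B = A / A.sum()                        # normalize total mass to 1
            for _ in range(max_iters):
                B *= (target / B.sum(axis=1))[:, None]     # fix the row sums
                if np.abs(B.sum(axis=0) - target).sum() <= eps:
                    break                          # column l1-error small enough
                B *= (target / B.sum(axis=0))[None, :]     # fix the column sums
            return B

    For instance, sinkhorn_uniform(np.random.rand(4, 4)) returns a matrix whose row and column sums are all close to 1/4. Each iteration touches all m non-zero entries, which is roughly where the classical Õ(m/ε²) cost comes from; the quantum speed-up replaces these exact marginal computations with amplitude-estimation-based approximations.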
